This script takes a deep dive into the Landsat 8 labels for a more rigorous analysis of inconsistent band data and outliers in the filtered label dataset. Here we determine whether any more label data points should be removed from the training dataset and whether the metadata in the outlier dataset can help us pre-emptively toss out scenes when we go to apply the classification algorithm.
harmonize_version = "v2024-04-17"
outlier_version = "v2024-04-17"
LS8 <- read_rds(paste0("data/labels/harmonized_LS89_labels_", harmonize_version, ".RDS")) %>%
filter(mission == "LANDSAT_8")
First, a quick look at the data to check consistency between the user-pulled values and our re-pull: the user data are in “BX” columns and the re-pull is in “SR_BX” columns. These checks assure data quality in case a volunteer didn’t follow the directions exactly.
pmap(.l = list(user_band = LS89_user,
ee_band = LS89_ee,
data = list(LS8),
mission = list("LANDSAT_8")),
.f = make_band_comp_plot)
[Six band comparison plots, one per user-band/re-pull-band pair.]
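make_band_comp_plot is a project helper defined elsewhere in the repository; as a rough sketch, the kind of comparison it draws (user-pulled value against the re-pulled value with a 1:1 line) could look like the following, where band_comp_sketch is a hypothetical stand-in:

# sketch of a band comparison plot: user-pulled value vs re-pulled value,
# with a 1:1 line to highlight mismatches (hypothetical stand-in for the
# project's make_band_comp_plot helper)
band_comp_sketch <- function(user_band, ee_band, data, mission) {
  ggplot(data, aes(x = .data[[user_band]], y = .data[[ee_band]])) +
    geom_point(alpha = 0.3) +
    geom_abline(slope = 1, intercept = 0, color = "red") +
    labs(title = paste(mission, user_band, "vs", ee_band),
         x = user_band, y = ee_band) +
    theme_bw()
}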
There isn’t a ton of mismatch here; we’ll just use B7/SR_B7 as a reference to filter inconsistent labels.
LS8_inconsistent <- LS8 %>%
filter((is.na(SR_B7) | SR_B7 != B7))
LS8_inconsistent %>%
group_by(class) %>%
summarise(n_labels = n()) %>%
kable()
| class | n_labels |
|---|---|
| cloud | 5 |
| lightNearShoreSediment | 1 |
| offShoreSediment | 3 |
| shorelineContamination | 1 |
Most of these are cloud labels, where the pixel is saturated, and then masked in the re-pull value (resulting in an NA). Let’s drop those from this subset and then look more.
LS8_inconsistent <- LS8_inconsistent %>%
filter(!(class == "cloud" & is.na(SR_B7)))
This leaves 0.9% of the Landsat 8 labels as inconsistent. Let’s do a quick sanity check to make sure that we’ve dropped values that are inconsistent between pulls:
LS8_filtered <- LS8 %>%
  filter(# keep data where SR_B7 has data and the values match between the two
         # pulls...
         (!is.na(SR_B7) & SR_B7 == B7) |
           # ...or where the user-specified class is cloud and the pixel was saturated,
           # providing no surface reflectance data
           (class == "cloud" & is.na(SR_B7)),
         # and drop any label where a re-pulled band value is greater than 1,
         # which isn't a valid surface reflectance value
         if_all(LS89_ee,
                ~ . <= 1))
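Before plotting, a quick numeric spot-check (not part of the original workflow, just a sketch) confirms no mismatched rows survive the filter:

# should be 0: count of labels where SR_B7 exists but disagrees with B7
LS8_filtered %>%
  filter(!is.na(SR_B7) & SR_B7 != B7) %>%
  nrow()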
And plot:
[Band comparison plots for the filtered labels, one per band pair.]
Looks good!
And now let’s look at the data by class:
[Plots of band values by class, one per band.]
We aren’t actually modeling “other” (not sufficient observations to classify) or “shorelineContamination” (we’ll use this later to block areas where there is likely shoreline contamination in the AOI). Additionally, the “algalBloom” labels don’t have sufficient n (nor do we think these are necessarily visible), so let’s drop those categories and look at the data again.
LS8_for_class_analysis <- LS8_filtered %>%
filter(!(class %in% c("other", "shorelineContamination", "algalBloom")))
[Plots of band values by class after dropping the unused classes, one per band.]
Interesting - the classes look really similar in distribution (maybe because the cloud class values are so high). It will be interesting to see if there are statistical differences.
Let’s also go back and check to see if there is any pattern to the inconsistent labels.
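The table below is produced by the project’s own summarising code; it amounts to a tally of the remaining inconsistent labels by volunteer, roughly like this sketch:

# sketch: tally remaining inconsistent labels by volunteer initials and count
# how many distinct scene dates they span (column names follow the table below)
LS8_inconsistent %>%
  group_by(vol_init) %>%
  summarise(n_tot_labs = n(),
            n_dates = n_distinct(date)) %>%
  kable()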
| vol_init | n_tot_labs | n_dates |
|---|---|---|
| HAD | 2 | 2 |
| LRCP | 3 | 2 |
| MRB | 3 | 2 |
| SKS | 2 | 1 |
I’m not concerned about any systemic errors here that might require modified data handling for a specific scene or contributor; the labels are spread across volunteers and scenes.
There are statistical outliers within this dataset, and they may impact the interpretation of any statistical testing we do. Let’s see if we can narrow down when those outliers occur and/or glean anything from the outlier data that may be applicable to the application of the algorithm. Outliers may be a systemic issue (as in the whole scene is an outlier), a user issue (a volunteer may have been a bad actor), or they may just be real. This section asks those questions. The “true outliers” we dismiss from the dataset will also be used to aid interpretation/application of the algorithm across the Landsat stack, so it is important to note any patterns we see in the outlier dataset.
## [1] "Classes represented in outliers:"
## [1] "darkNearShoreSediment" "offShoreSediment" "openWater"
Okay, 21 outliers (>1.5*IQR) out of 989 - and they are all from non-cloud groups, and none of them are light near shore sediment.
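The outliers object used below is built by the project’s own code; a rough sketch of that kind of IQR-based flagging (the 1.5 × IQR threshold is from the text, but the per-band grouping here is an assumption) could look like:

# sketch: flag a non-cloud label as an outlier if any re-pulled band value
# falls outside 1.5 * IQR of that band (grouping by band only is an assumption;
# the project's actual flagging may group differently)
outlier_flags <- LS8_for_class_analysis %>%
  filter(class != "cloud") %>%
  pivot_longer(all_of(LS89_ee), names_to = "band", values_to = "value") %>%
  group_by(band) %>%
  mutate(lower = quantile(value, 0.25, na.rm = TRUE) - 1.5 * IQR(value, na.rm = TRUE),
         upper = quantile(value, 0.75, na.rm = TRUE) + 1.5 * IQR(value, na.rm = TRUE),
         flagged = value < lower | value > upper) %>%
  ungroup() %>%
  filter(flagged)

# one row per label/band flag; distinct labels give the outlier count
outlier_flags %>% distinct(user_label_id) %>% nrow()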
How many of these outliers are in specific scenes?
LS8_out_date <- outliers %>%
group_by(date, vol_init) %>%
summarize(n_out = n())
LS8_date <- LS8_for_class_analysis %>%
filter(class != "cloud") %>%
group_by(date, vol_init) %>%
summarise(n_tot = n())
LS8_out_date <- left_join(LS8_out_date, LS8_date) %>%
mutate(percent_outlier = n_out/n_tot*100) %>%
arrange(-percent_outlier)
LS8_out_date %>%
kable()
| date | vol_init | n_out | n_tot | percent_outlier |
|---|---|---|---|---|
| 2022-11-21 | HAD | 10 | 12 | 83.3333333 |
| 2020-04-05 | AMP | 2 | 9 | 22.2222222 |
| 2022-08-01 | MRB | 5 | 31 | 16.1290323 |
| 2014-06-08 | LRCP | 2 | 193 | 1.0362694 |
| 2020-07-10 | MRB | 1 | 98 | 1.0204082 |
| 2020-08-11 | HAD | 1 | 135 | 0.7407407 |
There are three scenes here with very high proportions of outliers - perhaps there is something about the atmospheric correction (AC) in these particular scenes? Or the general scene quality?
LS8_out_date %>%
filter(percent_outlier > 20) %>%
select(date, vol_init) %>%
left_join(., LS8) %>%
select(date, vol_init, CLOUD_COVER:DATA_SOURCE_WATER_VAPOR) %>%
distinct() %>%
kable()
| date | vol_init | CLOUD_COVER | IMAGE_QUALITY_OLI | IMAGE_QUALITY_TIRS | DATA_SOURCE_AIR_TEMPERATURE | DATA_SOURCE_ELEVATION | DATA_SOURCE_OZONE | DATA_SOURCE_PRESSURE | DATA_SOURCE_REANALYSIS | DATA_SOURCE_TIRS_STRAY_LIGHT_CORRECTION | DATA_SOURCE_WATER_VAPOR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2022-11-21 | HAD | 44.68, 44.68, 44.68, 44.68, 44.68 | 9, 9, 9, 9, 9 | 9, 9, 9, 9, 9 | MODIS | GLS2000 | MODIS | Calculated | GEOS-5 FP-IT | TIRS | MODIS |
| 2020-04-05 | AMP | 20.620, 20.620, 23.465, 26.310, 26.310 | 9, 9, 9, 9, 9 | 9, 9, 9, 9, 9 | MODIS | GLS2000 | MODIS | Calculated | GEOS-5 FP-IT | TIRS | MODIS |
Image quality is high across the board, but the 2022-11-21 image has pretty high cloud cover, and is almost entirely outliers. Let’s look at that scene:
Another case of high cloud cover and adjacent snow! We should definitely toss this scene. For consistency, let’s look at instances where outliers are in at least three bands for a given label:
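The table below comes from the project’s own tallying; a minimal sketch of that per-label count, reusing the long-format outlier_flags sketch from above, might be:

# sketch: count flagged bands per label and keep labels with outliers in at
# least three bands (column names follow the table below)
outlier_flags %>%
  group_by(date, class, vol_init, user_label_id) %>%
  summarise(n_bands_out = n(),
            bands_out = paste(band, collapse = "; "),
            .groups = "drop") %>%
  filter(n_bands_out >= 3) %>%
  kable()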
| date | class | vol_init | user_label_id | n_bands_out | bands_out |
|---|---|---|---|---|---|
| 2022-08-01 | openWater | MRB | 1030 | 4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB | 1031 | 4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB | 1032 | 4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB | 1033 | 4 | SR_B2; SR_B3; SR_B4; SR_B5 |
| 2022-08-01 | openWater | MRB | 1034 | 4 | SR_B2; SR_B3; SR_B4; SR_B5 |
Let’s group by image date and volunteer and tally up the number of labels where at least 3 bands were outliers:
| date | vol_init | n_labels |
|---|---|---|
| 2022-08-01 | MRB | 5 |
Interesting - let’s look at this scene too.
This scene has the weird green cloud situation happening and the south-extending NA data on the west side of the AOI. Let’s look at image quality here:
LS8_for_class_analysis %>%
filter(date == "2022-08-01") %>%
pluck("IMAGE_QUALITY_OLI") %>%
unique() %>%
unlist()
## [1] 9 9 9 9 9
That’s not helpful - the image quality is the highest it can be.
Do any of the labels have QA pixel indications of cloud or cloud shadow? The first pass here is for all data that don’t have a label of “cloud” (not just outliers). Let’s see if the low-certainty classification in the QA band is useful here (there is no medium certainty for LS8/9).
LS8_for_class_analysis %>%
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) %in% c("01", "11") ~ "cirrus",
                        str_sub(QA_PIXEL_binary, 3, 4) %in% c("01", "11") ~ "snow/ice",
                        str_sub(QA_PIXEL_binary, 5, 6) %in% c("01", "11") ~ "cloud shadow",
                        str_sub(QA_PIXEL_binary, 7, 8) %in% c("01", "11") ~ "cloud",
                        TRUE ~ "clear")) %>%
  group_by(QA) %>%
  filter(class != "cloud") %>%
  summarize(n_tot = n()) %>%
  kable()
| QA | n_tot |
|---|---|
| clear | 569 |
| cloud shadow | 20 |
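For context, QA_PIXEL_binary is built upstream of this document; it reads as a 16-character, most-significant-bit-first representation of the packed QA_PIXEL integer, which is what the str_sub() positions above index into. One (assumed) way to derive such a string:

# sketch: convert the integer QA_PIXEL value to a 16-character binary string,
# most significant bit first (assumes a raw integer QA_PIXEL column is available)
int_to_bin16 <- function(x) {
  map_chr(x, ~ paste(rev(as.integer(intToBits(.x))[1:16]), collapse = ""))
}
# e.g. LS8 %>% mutate(QA_PIXEL_binary = int_to_bin16(QA_PIXEL))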
The low-confidence flag behaves much better for LS8 than the medium-confidence flag did for Landsat 5/7 - let’s check to see that the classes are the same when we restrict to high confidence:
LS8_for_class_analysis %>%
mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) == 11 ~ "cirrus",
str_sub(QA_PIXEL_binary, 3, 4) == 11 ~ "snow/ice",
str_sub(QA_PIXEL_binary, 5, 6) == 11 ~ "cloud shadow",
str_sub(QA_PIXEL_binary, 7, 8) == 11 ~ "cloud",
TRUE ~ "clear")) %>%
group_by(QA) %>%
filter(class != "cloud") %>%
summarize(n_tot = n()) %>%
kable()
| QA | n_tot |
|---|---|
| clear | 569 |
| cloud shadow | 20 |
They are the same! Let’s look at the cloud shadow group to see if there is anything egregious:
LS8_for_class_analysis %>%
filter(str_sub(QA_PIXEL_binary, 5, 6) == 11) %>%
group_by(date, vol_init) %>%
summarise(n_cloud_shadow = n()) %>%
arrange(-n_cloud_shadow) %>%
kable()
| date | vol_init | n_cloud_shadow |
|---|---|---|
| 2022-11-21 | HAD | 28 |
| 2016-09-01 | AMP | 3 |
| 2020-04-05 | AMP | 3 |
| 2017-09-04 | FYC | 2 |
| 2020-07-10 | MRB | 1 |
| 2022-08-01 | MRB | 1 |
We already know that the highest ranked cloud shadow scene here is also one we are going to drop, so I don’t think there is anything else to pursue here.
How many of these outliers have near-pixel clouds (as measured by ST_CDIST)?
There is 1 label (4.8% of outliers) that isn’t “cloud” in the outlier dataset with a cloud distance <500 m, and 32 labels (3.2%) in the whole dataset with a cloud distance <500 m. Since these proportions are about the same (or at least not severely disproportionate), I don’t think this is terribly helpful.
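For reference, a sketch of the kind of check behind those percentages (near_cloud_share is a hypothetical helper, and this assumes ST_CDIST is already expressed in metres for these labels):

# sketch: share of non-cloud labels within 500 m of a cloud, per dataset
near_cloud_share <- function(df) {
  df %>%
    filter(class != "cloud") %>%
    summarise(n_near = sum(ST_CDIST < 500, na.rm = TRUE),
              pct_near = round(100 * n_near / n(), 1))
}
near_cloud_share(outliers)
near_cloud_share(LS8_for_class_analysis)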
How many of the outliers have high cloud cover, as reported by the scene-level metadata? Note that we don’t have the scene cloud cover tied directly to individual labels, just a list of the scene-level cloud cover values associated with the AOI.
The outlier dataset contains 0 (0%) where the max cloud cover was > 75% and 0 (0%) where the mean cloud cover was > 50%. The filtered dataset contains 0 (0%) where max was >75% and 0 (0%) where the mean cloud cover was > 50%. Welp, this is unhelpful!
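Since the cloud cover field is a per-label list of scene-level values, the counts above amount to something like the following sketch (cc_summary is a hypothetical helper; the list-column handling is assumed from the metadata table shown earlier):

# sketch: summarise the per-label list of scene cloud cover values and count
# labels over the max > 75% and mean > 50% thresholds
cc_summary <- function(df) {
  df %>%
    mutate(max_cc = map_dbl(CLOUD_COVER, ~ max(as.numeric(.x), na.rm = TRUE)),
           mean_cc = map_dbl(CLOUD_COVER, ~ mean(as.numeric(.x), na.rm = TRUE))) %>%
    summarise(n_max_over_75 = sum(max_cc > 75),
              n_mean_over_50 = sum(mean_cc > 50))
}
cc_summary(outliers)
cc_summary(LS8_for_class_analysis)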
Pixels can also be saturated in one or more bands, so we need to make sure that QA_RADSAT is set to zero for all labels (including clouds).
LS8_for_class_analysis %>%
mutate(radsat = if_else(QA_RADSAT == 0,
"n",
"y")) %>%
group_by(radsat) %>%
summarize(n_tot = n()) %>%
kable()
| radsat | n_tot |
|---|---|
| n | 989 |
Great! No bands are saturated!
Landsat 8 and 9 feature an Aerosol QA band, derived from Band 1. We should look through the data here to see if any of the labels are in high aerosol QA pixels, which the USGS suggests should not be used.
LS8_for_class_analysis %>%
mutate(aerosol = if_else(str_sub(SR_QA_AEROSOL_binary, 1, 2) == 11,
"y",
"n")) %>%
group_by(aerosol) %>%
filter(class != "cloud") %>%
summarize(n_tot = n()) %>%
kable()
| aerosol | n_tot |
|---|---|
| n | 506 |
| y | 83 |
And let’s see when the instances of high aerosol occur:
LS8_for_class_analysis %>%
mutate(aerosol = if_else(str_sub(SR_QA_AEROSOL_binary, 1, 2) == 11,
"y",
"n")) %>%
filter(aerosol == "y") %>%
group_by(date) %>%
filter(class != "cloud") %>%
summarize(n_tot = n()) %>%
arrange(-n_tot) %>%
kable()
| date | n_tot |
|---|---|
| 2020-08-11 | 32 |
| 2017-09-04 | 23 |
| 2022-08-01 | 14 |
| 2014-06-08 | 5 |
| 2022-11-21 | 5 |
| 2016-09-01 | 3 |
| 2020-07-10 | 1 |
Let’s look at the 2020-08-11 and 2017-09-04 images. First 2020-08-11:
This image is clear as day, but if you zoom in near the Apostle Islands, you can see the haze.
And 2017-09-04:
Woah! I understand now why there might be some algae bloom labels in this dataset. This is very hazy - I’m also interested in the scene quality here:
LS8_for_class_analysis %>%
filter(date == "2017-09-04") %>%
pluck("IMAGE_QUALITY_OLI") %>%
unique() %>%
unlist()
## [1] 9 9 9 9 9
Well that’s surprising. I guess this is truly an instance where we’re going to have to trust the LS8 Aerosol bit, mask out all high-aerosol pixels, and toss all labels flagged with high aerosol.
For the purposes of training data, we are going to throw out the outlier scene from 2022-11-21 and drop the high-aerosol labels (unless the class is cloud):
LS8_training_labels <- LS8_for_class_analysis %>%
# drop the scene with outliers
filter(date != "2022-11-21",
# drop all the labels with high aerosol, unless the class is cloud
(str_sub(SR_QA_AEROSOL_binary, 1, 2) != 11 | class == "cloud"))
We do want to have an idea of how different the classes are, in regards to band data. While there are a bunch of interactions that we could get into here, for the sake of this analysis, we are going to analyze the class differences by band.
Kruskal-Wallis assumptions:

- observations are independent within and between groups
- the response is ordinal or continuous
- the group distributions have roughly similar shapes if we want to compare medians
ANOVA assumptions:

- observations are independent
- residuals are approximately normally distributed
- variances are roughly equal across groups (homoscedasticity)
We can’t entirely assert sample independence, and we know that the variance and distribution are different for “cloud” labels, but those data are also visibly different from the other classes.
In order to systematically test for differences between classes and be able to interpret the data, we will need to know some things about our data (e.g., whether the band values are normally distributed within classes and whether variances are comparable across classes).
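As a sketch of the kind of testing workflow this implies (the project’s own testing code runs upstream of this document), a per-band Kruskal-Wallis test followed by Dunn’s pairwise comparisons could look like the following, assuming the rstatix package and a Bonferroni adjustment:

library(rstatix)

# sketch: reshape to long format (one row per label/band) and run a
# Kruskal-Wallis test per band, then Dunn's post-hoc pairwise comparisons
# (object and column names follow this document; the adjustment method is assumed)
band_long <- LS8_training_labels %>%
  select(class, all_of(LS89_ee)) %>%
  pivot_longer(all_of(LS89_ee), names_to = "band", values_to = "value")

kw_by_band <- band_long %>%
  group_by(band) %>%
  kruskal_test(value ~ class)

dunn_by_band <- band_long %>%
  group_by(band) %>%
  dunn_test(value ~ class, p.adjust.method = "bonferroni")

# pairwise comparisons that are not significant after adjustment
dunn_by_band %>%
  filter(p.adj.signif == "ns")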
With this workflow, most classes are statistically different - below are the cases where the pairwise comparisons were not deemed statistically significant:
| band | group1 | group2 | n1 | n2 | statistic | p | p.adj | p.adj.signif |
|---|---|---|---|---|---|---|---|---|
| SR_B2 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | 1.74 | 0.0820 | 0.820 | ns |
| SR_B2 | darkNearShoreSediment | offShoreSediment | 110 | 155 | -2.66 | 0.00777 | 0.0777 | ns |
| SR_B3 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | 2.12 | 0.0336 | 0.336 | ns |
| SR_B4 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | 0.0766 | 0.939 | 1 | ns |
| SR_B5 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | -1.58 | 0.114 | 1 | ns |
| SR_B5 | offShoreSediment | openWater | 155 | 70 | -0.0405 | 0.968 | 1 | ns |
| SR_B6 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | -0.405 | 0.686 | 1 | ns |
| SR_B6 | darkNearShoreSediment | offShoreSediment | 110 | 155 | -2.55 | 0.0109 | 0.109 | ns |
| SR_B6 | darkNearShoreSediment | openWater | 110 | 70 | -2.38 | 0.0174 | 0.174 | ns |
| SR_B6 | lightNearShoreSediment | offShoreSediment | 164 | 155 | -2.39 | 0.0170 | 0.170 | ns |
| SR_B6 | lightNearShoreSediment | openWater | 164 | 70 | -2.20 | 0.0280 | 0.280 | ns |
| SR_B6 | offShoreSediment | openWater | 155 | 70 | -0.321 | 0.748 | 1 | ns |
| SR_B7 | darkNearShoreSediment | lightNearShoreSediment | 110 | 164 | 0.122 | 0.903 | 1 | ns |
| SR_B7 | darkNearShoreSediment | offShoreSediment | 110 | 155 | -1.63 | 0.104 | 1 | ns |
| SR_B7 | darkNearShoreSediment | openWater | 110 | 70 | -1.95 | 0.0516 | 0.516 | ns |
| SR_B7 | lightNearShoreSediment | offShoreSediment | 164 | 155 | -1.94 | 0.0518 | 0.518 | ns |
| SR_B7 | lightNearShoreSediment | openWater | 164 | 70 | -2.19 | 0.0286 | 0.286 | ns |
| SR_B7 | offShoreSediment | openWater | 155 | 70 | -0.657 | 0.511 | 1 | ns |
Alright, all over the map here - dark near shore is still a problem, but there are also some issues with offshore sediment, open water, and light near shore sediment. This could be problematic. We’ll have to see how these data look and hope that ML can pick up on the subtle differences.
DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment
There are definitely some varying patterns here, let’s zoom in on the sediment classes.
DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment
Hmm, this is even more fuzzy for the sediment classes.
Things to note for Landsat 8:
write_rds(LS8_training_labels, paste0("data/labels/LS8_labels_for_tvt_", outlier_version, ".RDS"))